
Self-adaptive weighting and sampling for physics-informed neural networks

Chen, Wenqian, Howard, Amanda, Stinis, Panos

arXiv.org Machine Learning

Physics-informed deep learning has emerged as a promising framework for solving partial differential equations (PDEs). Nevertheless, training these models on complex problems remains challenging, often leading to limited accuracy and efficiency. In this work, we introduce a hybrid adaptive sampling and weighting method to enhance the performance of physics-informed neural networks (PINNs). The adaptive sampling component identifies training points in regions where the solution exhibits rapid variation, while the adaptive weighting component balances the convergence rate across training points. Numerical experiments show that applying only adaptive sampling or only adaptive weighting is insufficient to consistently achieve accurate predictions, particularly when training points are scarce. Since each method emphasizes different aspects of the solution, their effectiveness is problem dependent. By combining both strategies, the proposed framework consistently improves prediction accuracy and training efficiency, offering a more robust approach for solving PDEs with PINNs.
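The two components described above can be illustrated with a toy sketch (not the authors' exact scheme): adaptive sampling keeps the candidate points with the largest residual magnitude, and adaptive weighting assigns each retained point a normalized weight proportional to its residual, so badly fitted regions dominate the loss. The cubic `residual` function is a hypothetical stand-in for a true PDE residual.

```python
import random

def residual(x):
    # Hypothetical stand-in for a PDE residual at point x; a real PINN
    # would evaluate this via automatic differentiation of the network.
    return (x - 0.5) ** 3

def hybrid_select(candidates, n_keep):
    """Illustrative hybrid step: keep the points with the largest
    absolute residual (adaptive sampling) and attach weights
    proportional to each point's residual magnitude (adaptive
    weighting), normalized to sum to one."""
    scored = sorted(candidates, key=lambda x: abs(residual(x)), reverse=True)
    kept = scored[:n_keep]
    mags = [abs(residual(x)) for x in kept]
    total = sum(mags) or 1.0  # guard against an all-zero residual field
    weights = [m / total for m in mags]
    return kept, weights

random.seed(0)
pts, w = hybrid_select([random.random() for _ in range(100)], 10)
```

In this sketch the two mechanisms are coupled through the same residual evaluations, which is the source of the problem dependence the abstract mentions: sampling changes *where* the loss is measured, weighting changes *how much* each measurement counts.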


An Imbalanced Learning-based Sampling Method for Physics-informed Neural Networks

Luo, Jiaqi, Yang, Yahong, Yuan, Yuan, Xu, Shixin, Hao, Wenrui

arXiv.org Machine Learning

This paper introduces Residual-based Smote (RSmote), an innovative local adaptive sampling technique tailored to improve the performance of Physics-Informed Neural Networks (PINNs) through imbalanced learning strategies. Traditional residual-based adaptive sampling methods, while effective in enhancing PINN accuracy, often struggle with efficiency and high memory consumption, particularly in high-dimensional problems. RSmote addresses these challenges by targeting regions with high residuals and employing oversampling techniques from imbalanced learning to refine the sampling process. Our approach is underpinned by a rigorous theoretical analysis that supports the effectiveness of RSmote in managing computational resources more efficiently. Through extensive evaluations, we benchmark RSmote against the state-of-the-art Residual-based Adaptive Distribution (RAD) method across a variety of dimensions and differential equations. The results demonstrate that RSmote not only achieves or exceeds the accuracy of RAD but also significantly reduces memory usage, making it particularly advantageous in high-dimensional scenarios. These contributions position RSmote as a robust and resource-efficient solution for solving complex partial differential equations, especially when computational constraints are a critical consideration.
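The imbalanced-learning step can be sketched as follows (a guess at the spirit of RSmote, not the paper's exact algorithm): points in a high-residual region are treated as a minority class, and SMOTE-style synthetic points are generated by interpolating between randomly paired members of that region. The example cluster is hypothetical.

```python
import random

def smote_like_oversample(high_res_pts, n_new, rng):
    """SMOTE-style oversampling sketch: each synthetic training point
    is a linear interpolation between two randomly paired points from
    the high-residual region, mimicking minority-class oversampling
    in imbalanced learning."""
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(high_res_pts, 2)
        t = rng.random()  # interpolation fraction in (0, 1)
        synthetic.append(a + t * (b - a))
    return synthetic

rng = random.Random(1)
region = [0.40, 0.42, 0.45, 0.47, 0.50]  # hypothetical high-residual cluster
new_pts = smote_like_oversample(region, 8, rng)
```

Because new points are generated locally from existing ones, no global residual distribution over the whole domain needs to be stored, which hints at the memory advantage claimed for high-dimensional problems.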


Tackling the Curse of Dimensionality with Physics-Informed Neural Networks

Hu, Zheyuan, Shukla, Khemraj, Karniadakis, George Em, Kawaguchi, Kenji

arXiv.org Machine Learning

The curse of dimensionality taxes computational resources heavily, with cost growing exponentially as the dimension increases. This poses great challenges in solving high-dimensional PDEs, as Richard E. Bellman first pointed out over 60 years ago. While there has been some recent success in numerically solving partial differential equations (PDEs) in high dimensions, such computations are prohibitively expensive, and true scaling of general nonlinear PDEs to high dimensions has never been achieved. We develop a new method for scaling up physics-informed neural networks (PINNs) to solve arbitrary high-dimensional PDEs. The new method, called Stochastic Dimension Gradient Descent (SDGD), decomposes the gradient of the PDE into pieces corresponding to different dimensions and randomly samples a subset of these dimensional pieces in each iteration of PINN training. We theoretically prove the convergence and other desired properties of the proposed method. We demonstrate in diverse tests that the proposed method can solve many notoriously hard high-dimensional PDEs, including the Hamilton-Jacobi-Bellman (HJB) and Schrödinger equations in tens of thousands of dimensions, very fast on a single GPU using the mesh-free PINN approach. Notably, we solve nonlinear PDEs with nontrivial, anisotropic, and inseparable solutions in 100,000 effective dimensions in 12 hours on a single GPU using SDGD with PINNs. Since SDGD is a general training methodology for PINNs, it can be applied to any current and future variants of PINNs to scale them up for arbitrary high-dimensional PDEs.
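The dimension-sampling idea can be illustrated on the Laplacian term of a PDE residual (a simplified sketch, not the paper's full algorithm): instead of summing second derivatives over all d dimensions, sample k of them and rescale by d/k, which gives an unbiased estimate. The quadratic test function and finite-difference derivatives below are illustrative stand-ins for network derivatives.

```python
import random

def u(x):
    # Hypothetical test function; its exact Laplacian is 2 * len(x).
    return sum(xi * xi for xi in x)

def second_deriv(f, x, i, h=1e-3):
    # Central finite difference in dimension i (stands in for autodiff).
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - 2 * f(x) + f(xm)) / (h * h)

def sdgd_laplacian(f, x, k, rng):
    """SDGD-style estimator sketch: sum second derivatives over a random
    subset of k dimensions and rescale by d / k, yielding an unbiased
    estimate of the full d-dimensional Laplacian term."""
    d = len(x)
    dims = rng.sample(range(d), k)
    return (d / k) * sum(second_deriv(f, x, i) for i in dims)

rng = random.Random(0)
x = [0.3] * 50
est = sdgd_laplacian(u, x, 5, rng)  # exact Laplacian here is 100.0
```

Only k of the d dimensional pieces are evaluated per iteration, which is where the per-step cost reduction comes from; unbiasedness is what lets stochastic training still converge.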


Operator Learning Enhanced Physics-informed Neural Networks for Solving Partial Differential Equations Characterized by Sharp Solutions

Lin, Bin, Mao, Zhiping, Wang, Zhicheng, Karniadakis, George Em

arXiv.org Artificial Intelligence

Physics-informed Neural Networks (PINNs) have been shown as a promising approach for solving both forward and inverse problems of partial differential equations (PDEs). Meanwhile, the neural operator approach, including methods such as Deep Operator Network (DeepONet) and Fourier neural operator (FNO), has been introduced and extensively employed in approximating solution of PDEs. Nevertheless, to solve problems consisting of sharp solutions poses a significant challenge when employing these two approaches. To address this issue, we propose in this work a novel framework termed Operator Learning Enhanced Physics-informed Neural Networks (OL-PINN). Initially, we utilize DeepONet to learn the solution operator for a set of smooth problems relevant to the PDEs characterized by sharp solutions. Subsequently, we integrate the pre-trained DeepONet with PINN to resolve the target sharp solution problem. We showcase the efficacy of OL-PINN by successfully addressing various problems, such as the nonlinear diffusion-reaction equation, the Burgers equation and the incompressible Navier-Stokes equation at high Reynolds number. Compared with the vanilla PINN, the proposed method requires only a small number of residual points to achieve a strong generalization capability. Moreover, it substantially enhances accuracy, while also ensuring a robust training process. Furthermore, OL-PINN inherits the advantage of PINN for solving inverse problems. To this end, we apply the OL-PINN approach for solving problems with only partial boundary conditions, which usually cannot be solved by the classical numerical methods, showing its capacity in solving ill-posed problems and consequently more complex inverse problems.


Efficient Training of Physics-Informed Neural Networks with Direct Grid Refinement Algorithm

Nilabh, Shikhar, Grandia, Fidel

arXiv.org Artificial Intelligence

This research presents the development of an innovative algorithm tailored for the adaptive sampling of residual points within the framework of Physics-Informed Neural Networks (PINNs). By addressing the limitations inherent in existing adaptive sampling techniques, our proposed methodology introduces a direct mesh refinement approach that ensures both computational efficiency and adaptive point placement. Verification studies were conducted to evaluate the performance of our algorithm, showing reasonable agreement between the model based on our novel approach and benchmark model results. Comparative analyses with established adaptive resampling techniques demonstrated the superior performance of our approach, particularly when implemented with a higher refinement factor. Overall, our findings highlight the improvement in simulation accuracy achievable through the application of our adaptive sampling algorithm for Physics-Informed Neural Networks.
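A minimal 1-D sketch of a direct-refinement pass (a guess at the spirit of the algorithm, not the paper's exact scheme): any cell whose residual estimate exceeds a threshold is split into `factor` equal subcells, while quiet cells are kept as-is. The spiking residual function is hypothetical.

```python
def refine(cells, resid, factor, threshold):
    """One direct-refinement sweep over 1-D cells (a, b): cells whose
    midpoint residual exceeds the threshold are split into `factor`
    equal subcells; the rest pass through unchanged."""
    out = []
    for (a, b) in cells:
        if resid((a + b) / 2) > threshold:
            w = (b - a) / factor
            out.extend((a + i * w, a + (i + 1) * w) for i in range(factor))
        else:
            out.append((a, b))
    return out

# Hypothetical residual spiking near x = 0.5
spike = lambda x: 1.0 / (abs(x - 0.5) + 1e-2)
cells = [(i / 4, (i + 1) / 4) for i in range(4)]
refined = refine(cells, spike, factor=4, threshold=5.0)
```

With `factor=4` and `threshold=5.0`, only the two cells straddling the spike are subdivided, so new residual points concentrate exactly where the residual is large while the rest of the grid stays coarse.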


Physics-Informed Neural Network Method for Parabolic Differential Equations with Sharply Perturbed Initial Conditions

Zong, Yifei, He, QiZhi, Tartakovsky, Alexandre M.

arXiv.org Artificial Intelligence

In this paper, we develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition. As an example of a parabolic problem, we consider the advection-dispersion equation (ADE) with a point (Gaussian) source initial condition. In the $d$-dimensional ADE, perturbations in the initial condition decay with time $t$ as $t^{-d/2}$, which can cause a large approximation error in the PINN solution. Localized large gradients in the ADE solution make the (common in PINN) Latin hypercube sampling of the equation's residual highly inefficient. Finally, the PINN solution of parabolic equations is sensitive to the choice of weights in the loss function. We propose a normalized form of the ADE where the initial perturbation of the solution does not decrease in amplitude and demonstrate that this normalization significantly reduces the PINN approximation error. We propose criteria for weights in the loss function that produce a more accurate PINN solution than those obtained with weights selected via other methods. Finally, we propose an adaptive sampling scheme that significantly reduces the PINN solution error for the same number of sampling (residual) points. We demonstrate the accuracy of the proposed PINN model for forward, inverse, and backward ADEs.
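The amplitude-decay issue and the effect of normalization can be checked numerically (an illustrative sketch using the free-space diffusion kernel; the constants and the exact normalization are assumptions, not the paper's formulas):

```python
import math

def ade_point_source(x, t, D=1.0, d=1):
    """Free-space point-source solution of the diffusion part of the
    ADE in d dimensions (1-D profile shown); its peak amplitude decays
    like t**(-d/2), which is what degrades the PINN fit."""
    return math.exp(-x * x / (4 * D * t)) / (4 * math.pi * D * t) ** (d / 2)

def normalized(x, t, D=1.0, d=1):
    # Rescaled field whose peak amplitude stays O(1) for all t, the
    # kind of normalization the paper argues eases PINN training
    # (the scaling factor here is illustrative).
    return (4 * math.pi * D * t) ** (d / 2) * ade_point_source(x, t, D, d)

peak_early = ade_point_source(0.0, 0.1)
peak_late = ade_point_source(0.0, 10.0)
```

The raw peak drops by a factor of 10 between `t = 0.1` and `t = 10` in 1-D, while the normalized field keeps a unit peak, so the network no longer has to represent values spanning orders of magnitude.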


A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks

Wu, Chenxi, Zhu, Min, Tan, Qinyang, Kartha, Yadhu, Lu, Lu

arXiv.org Artificial Intelligence

Physics-informed neural networks (PINNs) have been shown to be an effective tool for solving forward and inverse problems of partial differential equations (PDEs). PINNs embed the PDEs into the loss of the neural network, and this PDE loss is evaluated at a set of scattered residual points. The distribution of these points is highly important to the performance of PINNs. However, existing studies of PINNs have mainly used only a few simple residual point sampling methods. Here, we present a comprehensive study of two categories of sampling: non-adaptive uniform sampling and adaptive nonuniform sampling. We consider six uniform sampling methods: (1) equispaced uniform grid, (2) uniformly random sampling, (3) Latin hypercube sampling, (4) Halton sequence, (5) Hammersley sequence, and (6) Sobol sequence. We also consider a resampling strategy for uniform sampling. To improve the sampling efficiency and the accuracy of PINNs, we propose two new residual-based adaptive sampling methods: residual-based adaptive distribution (RAD) and residual-based adaptive refinement with distribution (RAR-D), which dynamically improve the distribution of residual points based on the PDE residuals during training. Hence, we have considered a total of 10 different sampling methods: six non-adaptive uniform sampling methods, uniform sampling with resampling, the two proposed adaptive sampling methods, and an existing adaptive sampling method. We extensively tested the performance of these sampling methods on four forward problems and two inverse problems in many setups. The numerical results presented in this study are summarized from more than 6000 simulations of PINNs. We show that the proposed adaptive sampling methods of RAD and RAR-D significantly improve the accuracy of PINNs with fewer residual points. The results obtained in this study can also be used as a practical guideline for choosing sampling methods.
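A sketch of the RAD idea (residual-based adaptive distribution; the exact form of the sampling density follows the shape described in the paper, but the code below is an illustrative stand-in, not its reference implementation): each candidate point is drawn with probability proportional to its normalized residual power plus a uniform floor.

```python
import random

def rad_sample(candidates, residual, n, k=1.0, c=1.0, rng=None):
    """RAD-style draw: sampling weight for point x is
    |residual(x)|**k / mean(|residual|**k) + c, so high-residual
    regions are over-represented while c > 0 keeps a uniform floor
    over the whole domain."""
    rng = rng or random.Random()
    eps = [abs(residual(x)) ** k for x in candidates]
    mean = sum(eps) / len(eps)
    weights = [e / mean + c for e in eps]
    return rng.choices(candidates, weights=weights, k=n)

rng = random.Random(2)
pool = [i / 999 for i in range(1000)]
# Hypothetical residual growing toward x = 1; with c = 0 the draw
# concentrates sharply near the right edge of the domain.
pts = rad_sample(pool, lambda x: x ** 4, 200, k=2.0, c=0.0, rng=rng)
```

The two hyperparameters trade off exploitation and exploration: large `k` sharpens the concentration on high-residual regions, while large `c` flattens the draw back toward uniform resampling.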


Solving Inverse Stochastic Problems from Discrete Particle Observations Using the Fokker-Planck Equation and Physics-informed Neural Networks

Chen, Xiaoli, Yang, Liu, Duan, Jinqiao, Karniadakis, George Em

arXiv.org Machine Learning

The Fokker-Planck (FP) equation governing the evolution of the probability density function (PDF) is applicable to many disciplines, but it requires specification of the coefficients for each case, which can be functions of space-time and not just constants, hence requiring the development of a data-driven modeling approach. When the available data are directly on the PDF, there exist inverse-problem methods that can be employed to infer the coefficients, determine the FP equation, and subsequently obtain its solution. Herein, we address a more realistic scenario, where only sparse data are given on the particles' positions at a few time instants, which are not sufficient for existing methods, e.g., kernel estimation algorithms, to accurately construct the PDF directly even at those times. To this end, we develop a general framework based on physics-informed neural networks (PINNs) that introduces a new loss function using the Kullback-Leibler divergence to connect the stochastic samples with the FP equation, so as to simultaneously learn the equation and infer the multi-dimensional PDF at all times. In particular, we consider two types of inverse problems: type I, where the FP equation is known but the initial PDF is unknown, and type II, in which, in addition to the unknown initial PDF, the drift and diffusion terms are also unknown. In both cases, we investigate problems with either Brownian or Levy noise or a combination of both. We demonstrate the new PINN framework in detail in the one-dimensional (1D) case, but we also provide results for up to 5D, demonstrating that we can infer both the FP equation and the dynamics simultaneously at all times with high accuracy using only very few discrete observations of the particles.
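The sample-to-PDF coupling can be sketched as follows (an illustrative simplification, not the paper's exact loss): up to a constant independent of the model, the KL divergence from the empirical particle distribution to the model PDF reduces to the negative mean log model density over the observed particles, so minimizing it fits the network-represented PDF to the discrete observations.

```python
import math

def kl_data_term(samples, log_pdf):
    """KL-based data term sketch: KL(p_data || p_model) equals, up to a
    model-independent constant, the negative mean log model density
    over the observed particle positions."""
    return -sum(log_pdf(x) for x in samples) / len(samples)

def gaussian_logpdf(mu, sigma):
    # Hypothetical closed-form model density standing in for the
    # PINN-represented PDF.
    return lambda x: (-0.5 * ((x - mu) / sigma) ** 2
                      - math.log(sigma * math.sqrt(2 * math.pi)))

# Hypothetical observed particle positions clustered near 0
obs = [-0.2, -0.1, 0.0, 0.1, 0.2]
good = kl_data_term(obs, gaussian_logpdf(0.0, 1.0))  # well-matched model
bad = kl_data_term(obs, gaussian_logpdf(3.0, 1.0))   # badly placed model
```

In the full framework this data term would be added to the FP-equation residual loss, so the same network is pulled toward both the physics and the observations.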


Physics-Informed Neural Networks for Multiphysics Data Assimilation with Application to Subsurface Transport

He, QiZhi, Barajas-Solano, David, Tartakovsky, Guzel, Tartakovsky, Alexandre M.

arXiv.org Machine Learning

Data assimilation for parameter and state estimation in subsurface transport problems remains a significant challenge due to the sparsity of measurements, the heterogeneity of porous media, and the high computational cost of forward numerical models. We present a machine learning method based on physics-informed deep neural networks (DNNs) for estimating space-dependent hydraulic conductivity, hydraulic head, and concentration fields from sparse measurements. In this approach, we employ individual DNNs to approximate the unknown parameters (e.g., hydraulic conductivity) and states (e.g., hydraulic head and concentration) of a physical system, and jointly train these DNNs by minimizing a loss function that consists of the governing equation residuals in addition to the error with respect to measurement data. We apply this approach to assimilate conductivity, hydraulic head, and concentration measurements for joint inversion of the conductivity, hydraulic head, and concentration fields in a steady-state advection-dispersion problem. We study the accuracy of the physics-informed DNN approach with respect to data size, number of variables (conductivity and head versus conductivity, head, and concentration), DNN size, and DNN initialization during training. We demonstrate that the physics-informed DNNs are significantly more accurate than standard data-driven DNNs when the training set consists of sparse data. We also show that the accuracy of parameter estimation increases as additional variables are inverted jointly.


PPINN: Parareal Physics-Informed Neural Network for time-dependent PDEs

Meng, Xuhui, Li, Zhen, Zhang, Dongkun, Karniadakis, George Em

arXiv.org Machine Learning

Physics-informed neural networks (PINNs) encode physical conservation laws and prior physical knowledge into the neural networks, ensuring the correct physics is represented accurately while greatly alleviating the need for supervised learning. While effective for relatively short-term time integration, when long-time integration of time-dependent PDEs is sought, the time-space domain may become arbitrarily large and hence training of the neural network may become prohibitively expensive. To this end, we develop a parareal physics-informed neural network (PPINN), decomposing a long-time problem into many independent short-time problems supervised by an inexpensive/fast coarse-grained (CG) solver. In particular, the serial CG solver is designed to provide approximate predictions of the solution at discrete times, while many fine PINNs are initiated simultaneously to correct the solution iteratively. There is a two-fold benefit to training PINNs on small data sets rather than working on a large data set directly: training individual PINNs on small data is much faster, and training the fine PINNs can be readily parallelized. Consequently, compared to the original PINN approach, the proposed PPINN approach may achieve a significant speedup for long-time integration of PDEs, assuming that the CG solver is fast and can provide reasonable predictions of the solution, hence helping the PPINN solution converge in just a few iterations. To investigate the PPINN performance on solving time-dependent PDEs, we first apply the PPINN to solve the Burgers equation, and subsequently we apply the PPINN to solve a two-dimensional nonlinear diffusion-reaction equation. Our results demonstrate that PPINNs converge in a couple of iterations, with significant speed-ups proportional to the number of time subdomains employed.
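The parareal skeleton behind PPINN can be sketched on a scalar ODE, du/dt = -u (an illustrative stand-in: here cheap Euler solvers play the roles of the CG solver and the fine PINNs). A serial coarse sweep predicts subdomain initial values, and fine corrections are folded back in via the parareal update U[n+1] = G(U[n]_new) + F(U[n]_old) - G(U[n]_old).

```python
import math

def coarse(u, dt):
    # Cheap coarse propagator G: one explicit Euler step for du/dt = -u
    # (stands in for the fast CG solver).
    return u * (1.0 - dt)

def fine(u, dt, substeps=100):
    # Expensive fine propagator F: many Euler substeps (stands in for a
    # fine PINN trained on one short time subdomain).
    h = dt / substeps
    for _ in range(substeps):
        u = u * (1.0 - h)
    return u

def parareal(u0, T, n_sub, iters):
    """Parareal iteration sketch: an initial coarse sweep, then
    correction sweeps in which the fine evaluations (parallelizable in
    a real PPINN) refine the coarse prediction."""
    dt = T / n_sub
    U = [u0]
    for n in range(n_sub):                      # initial serial coarse sweep
        U.append(coarse(U[-1], dt))
    for _ in range(iters):
        F = [fine(U[n], dt) for n in range(n_sub)]       # parallel in practice
        G_old = [coarse(U[n], dt) for n in range(n_sub)]
        for n in range(n_sub):                  # serial correction sweep
            U[n + 1] = coarse(U[n], dt) + F[n] - G_old[n]
    return U[-1]

approx = parareal(1.0, 1.0, n_sub=10, iters=3)
exact = math.exp(-1.0)
```

After a few iterations the corrected solution tracks the fine propagator far better than the coarse sweep alone, mirroring the abstract's claim that PPINNs converge in a couple of iterations with speed-ups from parallelizing the fine solves.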